1.0 - 4.0 years
1 - 4 Lacs
Indore, Madhya Pradesh, India
On-site
Contract Duration: 6 Months Responsibilities: Snowflake Administration & Development in Data Warehouse, ETL, and BI projects. End-to-end implementation of Snowflake cloud data warehouse and on-premise data warehouse solutions (Oracle/SQL Server). Expertise in Snowflake: Data modeling, ELT using Snowflake SQL, complex stored procedures, and standard DWH/ETL concepts. Advanced features: resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning. Zero copy clone, time travel, and data sharing deployment. Hands-on experience with Snowflake utilities: SnowSQL, SnowPipe, Big Data model techniques using Python. Data Migration from RDBMS to Snowflake cloud data warehouse. Deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling. Data security & access controls design expertise. Experience with AWS/Azure data storage and management technologies (S3, Blob). Process development for data transformation, metadata, dependency, and workload management. Proficiency in RDBMS: Complex SQL, PL/SQL, Unix Shell Scripting, performance tuning, and troubleshooting. Problem resolution for complex data pipeline issues, proactively and as they arise. Agile development methodologies experience. Good-to-Have Skills: CI/CD in Talend using Jenkins and Nexus. TAC configuration with LDAP, Job servers, Log servers, and databases. Job conductor, scheduler, and monitoring expertise. GIT repository management, including user roles and access control. Agile methodology and 24/7 Admin & Platform support. Effort estimation based on requirements. Strong written communication skills, effective and persuasive in both written and oral communication.
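For illustration only, here is a minimal sketch of the zero-copy clone, time travel, and virtual warehouse sizing features this listing calls for, issued from Python via the snowflake-connector-python package; the account, warehouse, and table names are invented placeholders, not details from the employer.

```python
# Minimal illustrative sketch: issuing Snowflake DWH/admin statements from Python.
# Requires `pip install snowflake-connector-python`; all object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="REPORTING_WH",
    database="ANALYTICS_DB",
    schema="PUBLIC",
)

statements = [
    # Zero-copy clone: a writable copy of a table without duplicating storage.
    "CREATE TABLE orders_dev CLONE orders",
    # Time travel: query the table as it looked 30 minutes ago.
    "SELECT COUNT(*) FROM orders AT(OFFSET => -60 * 30)",
    # Virtual warehouse sizing: resize compute independently of storage.
    "ALTER WAREHOUSE REPORTING_WH SET WAREHOUSE_SIZE = 'MEDIUM'",
]

cur = conn.cursor()
try:
    for sql in statements:
        cur.execute(sql)
        print(sql, "->", cur.fetchone())  # DDL returns a status row; the SELECT returns the count
finally:
    cur.close()
    conn.close()
```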
Posted 1 day ago
1.0 - 4.0 years
1 - 4 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Contract Duration: 6 Months Responsibilities: Snowflake Administration & Development in Data Warehouse, ETL, and BI projects. End-to-end implementation of Snowflake cloud data warehouse and on-premise data warehouse solutions (Oracle/SQL Server). Expertise in Snowflake: Data modeling, ELT using Snowflake SQL, complex stored procedures, and standard DWH/ETL concepts. Advanced features: resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning. Zero copy clone, time travel, and data sharing deployment. Hands-on experience with Snowflake utilities: SnowSQL, SnowPipe, Big Data model techniques using Python. Data Migration from RDBMS to Snowflake cloud data warehouse. Deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling. Data security & access controls design expertise. Experience with AWS/Azure data storage and management technologies (S3, Blob). Process development for data transformation, metadata, dependency, and workload management. Proficiency in RDBMS: Complex SQL, PL/SQL, Unix Shell Scripting, performance tuning, and troubleshooting. Problem resolution for complex data pipeline issues, proactively and as they arise. Agile development methodologies experience. Good-to-Have Skills: CI/CD in Talend using Jenkins and Nexus. TAC configuration with LDAP, Job servers, Log servers, and databases. Job conductor, scheduler, and monitoring expertise. GIT repository management, including user roles and access control. Agile methodology and 24/7 Admin & Platform support. Effort estimation based on requirements. Strong written communication skills, effective and persuasive in both written and oral communication.
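The same skill set also names resource monitors and RBAC controls. The sketch below shows, purely as a hedged illustration with made-up names (MONTHLY_MONITOR, ANALYST_ROLE, ANALYTICS_DB), the style of administrative SQL those features typically involve, driven from Python.

```python
# Illustrative resource-monitor and RBAC administration in Snowflake; names are hypothetical.
import snowflake.connector

ADMIN_STATEMENTS = [
    # Resource monitor: cap monthly credit usage and suspend the warehouse at the limit.
    """CREATE OR REPLACE RESOURCE MONITOR monthly_monitor
         WITH CREDIT_QUOTA = 100
         FREQUENCY = MONTHLY
         START_TIMESTAMP = IMMEDIATELY
         TRIGGERS ON 90 PERCENT DO NOTIFY
                  ON 100 PERCENT DO SUSPEND""",
    "ALTER WAREHOUSE reporting_wh SET RESOURCE_MONITOR = monthly_monitor",
    # Role-based access control: read-only access for an analyst role.
    "CREATE ROLE IF NOT EXISTS analyst_role",
    "GRANT USAGE ON DATABASE analytics_db TO ROLE analyst_role",
    "GRANT USAGE ON SCHEMA analytics_db.public TO ROLE analyst_role",
    "GRANT SELECT ON ALL TABLES IN SCHEMA analytics_db.public TO ROLE analyst_role",
]

def run_admin_statements(conn) -> None:
    """Execute each administrative statement in order."""
    cur = conn.cursor()
    try:
        for sql in ADMIN_STATEMENTS:
            cur.execute(sql)
    finally:
        cur.close()

if __name__ == "__main__":
    conn = snowflake.connector.connect(account="my_account", user="admin_user",
                                       password="***", role="ACCOUNTADMIN")
    try:
        run_admin_statements(conn)
    finally:
        conn.close()
```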
Posted 1 day ago
12.0 - 22.0 years
3 - 6 Lacs
Chennai, Tamil Nadu, India
On-site
We are hiring an ESA Solution Architect - COE for a CMMI Level 5 client. If you have relevant experience and are looking for a challenging opportunity, we invite you to apply. Key Responsibilities: Design and implement enterprise solutions that align with business and technical requirements. Lead migration projects from on-premise to cloud or cloud-to-cloud (preferably Snowflake). Provide expertise in ETL technologies such as Informatica, Matillion, and Talend. Develop Snowflake-based solutions and optimize data architectures. Analyze project constraints, mitigate risks, and recommend process improvements. Act as a liaison between technical teams and stakeholders, translating business needs into technical solutions. Conduct architectural system evaluations to ensure scalability and efficiency. Define processes and procedures to streamline solution delivery. Create solution prototypes and participate in technology selection. Ensure compliance with strategic guidelines, technical standards, and business objectives. Oversee solution development and collaborate closely with project management and IT teams. Required Skills & Experience: 10+ years of experience in technical solutioning and enterprise solution architecture. Proven experience in cloud migration projects (on-prem to cloud/cloud-to-cloud). Strong expertise in Snowflake architecture and solutioning. Hands-on experience with ETL tools such as Informatica, Matillion, and Talend. Excellent problem-solving and risk mitigation skills. Ability to work with cross-functional teams and align technical solutions with business goals. If you are interested, please share your updated profile.
Posted 1 day ago
7.0 - 15.0 years
20 - 36 Lacs
Chennai, Tamil Nadu, India
On-site
Bounteous x Accolite is a premier end-to-end digital transformation consultancy dedicated to partnering with ambitious brands to create digital solutions for today's complex challenges and tomorrow's opportunities. With uncompromising standards for technical and domain expertise, we deliver innovative and strategic solutions in Strategy, Analytics, Digital Engineering, Cloud, Data & AI, Experience Design, and Marketing. Our Co-Innovation methodology is a unique engagement model designed to align interests and accelerate value creation. Our clients worldwide benefit from the skills and expertise of over 4,000 expert team members across the Americas, APAC, and EMEA. By partnering with leading technology providers, we craft transformative digital experiences that enhance customer engagement and drive business success. About Bounteous (https://www.bounteous.com/): Founded in 2003 in Chicago, Bounteous is a leading digital experience consultancy that co-innovates with the world's most ambitious brands to create transformative digital experiences. With services in Strategy, Experience Design, Technology, Analytics and Insight, and Marketing, Bounteous elevates brand experiences through technology partnerships and drives superior client outcomes. For more information, please visit www.bounteous.com. Information Security Responsibilities: Promote and enforce awareness of key information security practices, including acceptable use of information assets, malware protection, and password security protocols. Identify, assess, and report security risks, focusing on how these risks impact the confidentiality, integrity, and availability of information assets. Understand and evaluate how data is stored, processed, or transmitted, ensuring compliance with data privacy and protection standards (GDPR, CCPA, etc.). Ensure data protection measures are integrated throughout the information lifecycle to safeguard sensitive information. Preferred Qualifications: 7+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. Working knowledge of ETL technology - Talend / Apache NiFi / AWS Glue. Experience with relational SQL and NoSQL databases. Experience with big data tools: Hadoop, Spark, Kafka, etc. (nice to have). Advanced Alteryx Designer (mandatory at this point - relaxing that would be tough). Tableau dashboarding. AWS (familiarity with Lambda, EC2, AMI). Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. (nice to have). Experience with cloud services: EMR, RDS, Redshift, or Snowflake. Experience with stream-processing systems: Storm, Spark Streaming, etc. (nice to have). Experience with object-oriented/object function scripting languages: Python, Java, Scala, etc. Responsibilities: Work with Project Managers, Senior Architects, and other team members from Bounteous and Client teams to evaluate data systems and project requirements. In cooperation with platform developers, develop scalable and fault-tolerant Extract, Transform, Load (ETL) and integration systems for various data platforms that can operate at appropriate scale, meeting security, logging, fault-tolerance, and alerting requirements. Work on data migration projects.
Effectively communicate data requirements of various data platforms to team members. Evaluate and document existing data ecosystems and platform capabilities. Configure CI/CD pipelines. Implement the proposed architecture and assist in infrastructure setup. We invite you to stay connected with us by subscribing to our monthly job openings alert here. Research shows that women and other underrepresented groups apply only if they meet 100% of the criteria of a job posting. If you have passion and intelligence, and possess a technical knack (even if you're missing some of the above), we encourage you to apply. Bounteous x Accolite is focused on promoting an inclusive environment and is proud to be an equal opportunity employer. We celebrate the different viewpoints and experiences our diverse group of team members bring to Bounteous x Accolite. Bounteous x Accolite does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, physical or mental disability, national origin, veteran status, or any other status protected under federal, state, or local law. In addition, you have the opportunity to participate in several Team Member Networks, sometimes referred to as employee resource groups (ERGs), that host space with individuals with shared identities, interests, and passions. Our Team Member Networks celebrate communities of color, life as a working parent or caregiver, the 2SLGBTQIA+ community, wellbeing, and more. Regardless of your respective identity, there are various avenues we involve team members in the Bounteous x Accolite community. Bounteous x Accolite is willing to sponsor eligible candidates for employment visas.
Posted 1 day ago
10.0 - 15.0 years
10 - 15 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
How You Will Fulfill Your Potential: Be part of a team, working closely with product development, UX designers, sales, and other engineers. Write high-quality, well-tested, and scalable code. Evaluate the short- and long-term implications of every implementation decision. Grow professionally and learn from other accomplished software engineers through pair coding while helping your peers to do the same. Collaborate on critical system architecture decisions. Review and provide feedback on other developers' code and designs. Evaluate modern technologies, prototype innovative approaches to problems, and make the business case for change. Use infrastructure-as-code to build and deploy cloud-native services in AWS. Requirements: Experience leading and mentoring junior software engineers in a professional setting. Meaningful experience in, but not limited to, any one of the following: Java, C#, Ruby on Rails, Go, Python, AWS (Amazon Web Services), JavaScript, React/Redux. Should include experience working with non-relational as well as relational databases. Experience with data pipelines, data warehouses, and Snowflake is a plus. Strong knowledge of data structures and algorithms. Excellent object-oriented or functional analysis and design skills. Comfortable multi-tasking, managing multiple stakeholders, and working as part of a global team. Proven communication and interpersonal ability. Experience building client- and consumer-facing products is a plus, but far from required. Knowledge of existing strategic firmwide platforms is a plus, but far from required.
Posted 4 days ago
1.0 - 4.0 years
1 - 4 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Drive adoption of cloud technology for data processing and warehousing. You will drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake. Engage with data consumers and producers to match reliability and cost requirements. You will drive strategy with data. Relevant Technologies: Snowflake, AWS, Grafana, PromQL, Python, Java, OpenTelemetry, GitLab. Basic Qualifications: A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline). 1-4+ years of relevant work experience in a team-focused environment. 1-2 years of hands-on developer experience at some point in your career. Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk. Experience with cloud infrastructure (AWS, Azure, or GCP). Proven experience in driving strategy with data. Deep understanding of the multi-dimensionality of data, data curation, and data quality, such as traceability, security, performance latency, and correctness across supply and demand processes. In-depth knowledge of relational and columnar SQL databases, including database design. Expertise in data warehousing concepts (e.g., star schema, entitlement implementations, SQL vs. NoSQL modelling, milestoning, indexing, partitioning). Excellent communication skills and the ability to work with subject matter experts to extract critical business concepts. Independent thinker, willing to engage, challenge, or learn. Ability to stay commercially focused and to always push for quantifiable commercial impact. Strong work ethic, a sense of ownership and urgency. Strong analytical and problem-solving skills. Ability to build trusted partnerships with key contacts and users across business and engineering teams. Preferred Qualifications: Understanding of Data Lake / Lakehouse technologies, incl. Apache Iceberg. Experience with cloud databases (e.g., Snowflake, BigQuery). Understanding of data modelling concepts. Working knowledge of open-source tools such as AWS Lambda and Prometheus. Experience coding in Java or Python.
Posted 4 days ago
8.0 - 12.0 years
8 - 12 Lacs
Chennai, Tamil Nadu, India
On-site
Accountabilities: Lead the design, development, and deployment of high-performance, scalable data warehouses and data pipelines. Collaborate closely with multi-functional teams to understand business requirements and translate them into technical solutions. Oversee and optimize the use of Snowflake for data storage and analytics. Develop and maintain SQL-based ETL processes. Implement data workflows and orchestrations using Airflow. Apply DBT for data transformation and modeling tasks. Mentor and guide junior data engineers, fostering a culture of learning and innovation within the team. Conduct performance tuning and optimization for both ongoing and new data projects. Proven ability to handle large, complex data sets and develop data-centric solutions. Strong problem-solving skills and a keen analytical mentality. Excellent communication and leadership skills, with the ability to work effectively in a team-oriented environment. 8-12 years of experience in data engineering roles, focusing on data warehousing, data integration, and data product development. Essential Skills/Experience: Snowflake, SQL, Airflow, DBT. Desirable Skills/Experience: SnapLogic, Python. Academic Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
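As a hedged sketch of how the essential skills above (Snowflake, SQL, Airflow, DBT) commonly fit together, an Airflow DAG might run a load step and then trigger dbt transformations and tests; the DAG id, schedule, and /opt/dbt_project path below are assumptions for illustration, not details from the posting.

```python
# Minimal illustrative Airflow DAG: a load step followed by dbt run/test.
# dag_id, schedule, and the dbt project path are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def load_raw_data(**context):
    """Placeholder extract/load step; in practice this might trigger Snowpipe or COPY INTO."""
    print("Loading raw data for", context["ds"])


with DAG(
    dag_id="warehouse_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load_raw_data", python_callable=load_raw_data)

    # dbt handles the in-warehouse transformation and modeling layer.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test --target prod",
    )

    load >> dbt_run >> dbt_test
```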
Posted 4 days ago
4.0 - 7.0 years
3 - 7 Lacs
Chennai, Tamil Nadu, India
On-site
Accountabilities Strategic Leadership: Evaluate current platforms and lead the design of future-ready solutions, embedding AI-driven efficiencies and proactive interventions. Innovation & Integration: Introduce and integrate AI technologies to enhance ways of working, driving cost-effectiveness and operational excellence. Platform Maturity & Management: Ensure platforms are scalable and compliant, with robust automation and optimized technology stacks. Lead Deliveries: Oversee and manage the delivery of projects, ensuring timely execution and alignment with strategic goals. Thought Leadership: Champion data mesh and product-oriented work methodologies to continuously evolve our data landscapes. Quality and Compliance: Implement quality assurance processes, emphasizing data accuracy and security. Collaborative Leadership: Foster an environment that supports cross-functional collaboration and continuous improvement. Essential Skills/Experience Extensive experience with Snowflake, AI platforms, and cloud infrastructure. Proven track record in thought leadership, platform strategy, and cross-disciplinary innovation. Expertise in AI/GenAI integration with a focus on practical business applications. Strong experience in DataOps, DevOps, and cloud environments such as AWS. Excellent stakeholder management and the ability to lead diverse teams toward innovative solutions. Background in the pharmaceutical sector is a plus.
Posted 4 days ago
5.0 - 7.0 years
5 - 7 Lacs
Noida, Uttar Pradesh, India
On-site
Role & Required Skills: Proven experience in Snowflake. Good experience in SQL and Python. Experience in Data Warehousing. Experience in Data migration from SQL to Snowflake. AWS experience is nice to have. Good communication skills. Responsibilities: Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics). Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models. Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models. Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization. Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC. Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks. Analyze and translate business needs into long-term solution data models. Evaluate existing data systems. Work with the development team to create conceptual data models and data flows. Develop best practices for data coding to ensure consistency within the system. Review modifications of existing systems for cross-compatibility. Implement data strategies and develop physical data models. Update and optimize local and metadata models. Evaluate implemented data systems for variances, discrepancies, and efficiency. Troubleshoot and optimize data systems.
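One of the core requirements above is data migration from SQL to Snowflake. Purely as an illustrative sketch (the connection string, table names, and chunk size are invented, not the employer's actual approach), such a migration step might read the source table in chunks and bulk-load it into Snowflake:

```python
# Hedged sketch of a SQL-to-Snowflake migration step: chunked read from a source
# database, bulk load into Snowflake. All connection details and names are made up.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas
from sqlalchemy import create_engine

SOURCE_URL = "mssql+pyodbc://user:pass@source_dsn"   # hypothetical source (e.g., SQL Server)
SOURCE_TABLE = "dbo.customers"
TARGET_TABLE = "CUSTOMERS"

source_engine = create_engine(SOURCE_URL)
sf_conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="ANALYTICS_DB", schema="STAGING",
)

try:
    # Chunked reads keep memory bounded for large source tables.
    for chunk in pd.read_sql(f"SELECT * FROM {SOURCE_TABLE}", source_engine, chunksize=50_000):
        # auto_create_table needs a reasonably recent snowflake-connector-python version.
        success, _, nrows, _ = write_pandas(sf_conn, chunk, TARGET_TABLE, auto_create_table=True)
        print(f"loaded {nrows} rows, success={success}")
finally:
    sf_conn.close()
```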
Posted 5 days ago
7.0 - 11.0 years
7 - 11 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Key Responsibilities: Develop and optimize data pipelines using Spark and Databricks. Write complex SQL queries to analyze and manipulate large datasets. Implement Python-based scripts for data processing and automation. Design and maintain ETL workflows for structured and unstructured data. Collaborate with cross-functional teams to ensure high-performance data architectures. Ensure data quality, governance, and security within the pipelines. Mandatory Skills: Strong proficiency in SQL, Python, Spark, and Databricks. Hands-on experience with distributed computing frameworks. Good-to-Have Skills (Optional): Experience with Airflow / Prefect for workflow orchestration. Knowledge of Snowflake for cloud data warehousing. Experience with designing and building frameworks for data processing and/or data quality. Experience with AWS / Azure / GCP cloud environments. Experience with data modeling. Exposure to Kafka for real-time data streaming. Experience with NoSQL databases. Exposure to or knowledge of data visualization tools like Power BI, Google Looker, Tableau, etc. Preferred Qualifications: Bachelor's/Master's degree in Computer Science, Engineering, or a related field. Strong analytical and problem-solving skills. Effective communication and teamwork abilities.
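For the Spark/Databricks portion of this role, the following is a minimal illustrative PySpark step (paths, column names, and the data-quality rule are assumptions): read raw files, apply basic cleansing, aggregate, and write a curated output.

```python
# Illustrative PySpark pipeline step: read, clean, aggregate, and write a dataset.
# Paths and columns are hypothetical; on Databricks a SparkSession already exists.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/orders/*.csv")

cleaned = (
    raw.dropDuplicates(["order_id"])                 # basic de-duplication
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())          # simple data-quality rule
)

daily_revenue = (
    cleaned.groupBy(F.to_date("order_ts").alias("order_date"))
           .agg(F.sum("amount").alias("revenue"),
                F.countDistinct("customer_id").alias("customers"))
)

# Write out as partitioned Parquet (or a Delta table on Databricks).
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_revenue")
```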
Posted 5 days ago
6.0 - 8.0 years
13 - 17 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Experience in SQL and Snowflake, including data warehouse creation. Exposure to connecting to different data sources and readying data layers for business consumption. ETL with Snowflake, with extensive experience in data warehouse architecture, management, and Snowpipe. User-defined functions vs. stored procedures, multi-cluster warehouses, and views vs. secure views and materialized views in Snowflake. Tuning and how to improve performance in Snowflake. SQL and database development, with the ability to write complex queries and optimize database performance. Programming skills in Python, including experience with data libraries and frameworks. Ability to use varied techniques in data ingestion, such as unstructured and structured data with API, ODBC, etc. Ability to architect ETL flows for ingestion. Understanding of data modeling principles and best practices. Implement robust, scalable data pipelines and architectures in Snowflake, optimizing data flow and collection for cross-functional teams. Write complex SQL queries and scripts to support data transformation, aggregation, and analysis. Use Python for automating data processes, integrating data systems, and building advanced analytics models, and lead data modeling initiatives, ensuring the integrity and efficiency of data structures. Work with cross-functional teams to understand data needs, gather requirements, and deliver data-driven solutions that align with business goals. Implement data security and compliance measures, adhering to industry standards and regulations. Perform data analysis and provide insights to support decision-making and strategy development. Stay abreast of emerging technologies and trends in data engineering, recommending and implementing improvements to our data systems and processes. Implement projects using Agile methodologies, with adequate exposure to JIRA and Slack usage.
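The listing contrasts plain views with secure views and materialized views. A hedged sketch of that distinction follows; the database, schema, and table names are invented, and note that materialized views may require a higher Snowflake edition.

```python
# Illustrative contrast: secure view vs. materialized view in Snowflake, issued from Python.
# Object names are hypothetical; materialized views may require a higher Snowflake edition.
import snowflake.connector

VIEW_STATEMENTS = [
    # Secure view: hides the view definition and blocks optimizations that could leak
    # underlying data; often used when exposing data across roles or via data sharing.
    """CREATE OR REPLACE SECURE VIEW analytics_db.reporting.customer_orders_v AS
         SELECT customer_id, order_id, amount
         FROM analytics_db.public.orders
         WHERE is_deleted = FALSE""",
    # Materialized view: precomputes and stores results, trading storage and maintenance
    # cost for faster repeated aggregate queries.
    """CREATE OR REPLACE MATERIALIZED VIEW analytics_db.reporting.daily_revenue_mv AS
         SELECT TO_DATE(order_ts) AS order_date, SUM(amount) AS revenue
         FROM analytics_db.public.orders
         GROUP BY TO_DATE(order_ts)""",
]

conn = snowflake.connector.connect(account="my_account", user="etl_user", password="***",
                                   warehouse="TRANSFORM_WH")
cur = conn.cursor()
try:
    for sql in VIEW_STATEMENTS:
        cur.execute(sql)
finally:
    cur.close()
    conn.close()
```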
Posted 5 days ago
6.0 - 8.0 years
10 - 14 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Experience in SQL and Snowflake, including data warehouse creation. Exposure to connecting to different data sources and readying data layers for business consumption. ETL with Snowflake, with extensive experience in data warehouse architecture, management, and Snowpipe. User-defined functions vs. stored procedures, multi-cluster warehouses, and views vs. secure views and materialized views in Snowflake. Tuning and how to improve performance in Snowflake. SQL and database development, with the ability to write complex queries and optimize database performance. Programming skills in Python, including experience with data libraries and frameworks. Ability to use varied techniques in data ingestion, such as unstructured and structured data with API, ODBC, etc. Ability to architect ETL flows for ingestion. Understanding of data modeling principles and best practices. Implement robust, scalable data pipelines and architectures in Snowflake, optimizing data flow and collection for cross-functional teams. Write complex SQL queries and scripts to support data transformation, aggregation, and analysis. Use Python for automating data processes, integrating data systems, and building advanced analytics models, and lead data modeling initiatives, ensuring the integrity and efficiency of data structures. Work with cross-functional teams to understand data needs, gather requirements, and deliver data-driven solutions that align with business goals. Implement data security and compliance measures, adhering to industry standards and regulations. Perform data analysis and provide insights to support decision-making and strategy development. Stay abreast of emerging technologies and trends in data engineering, recommending and implementing improvements to our data systems and processes. Implement projects using Agile methodologies, with adequate exposure to JIRA and Slack usage.
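The same skill list distinguishes user-defined functions from stored procedures. As a hedged sketch only (all names, logic, and retention rules are invented), a SQL UDF returns a value usable inside a query, while a stored procedure encapsulates procedural logic invoked with CALL:

```python
# Illustrative UDF vs. stored procedure in Snowflake; all names and logic are hypothetical.
UDF_SQL = """
CREATE OR REPLACE FUNCTION analytics_db.public.net_amount(amount FLOAT, tax_rate FLOAT)
RETURNS FLOAT
AS
$$
    amount * (1 - tax_rate)
$$
"""

PROCEDURE_SQL = """
CREATE OR REPLACE PROCEDURE analytics_db.public.purge_old_orders(days_to_keep INTEGER)
RETURNS STRING
LANGUAGE SQL
AS
$$
BEGIN
    DELETE FROM analytics_db.public.orders
        WHERE order_ts < DATEADD(day, -1 * :days_to_keep, CURRENT_TIMESTAMP());
    RETURN 'purge complete';
END;
$$
"""

USAGE_SQL = [
    # UDF: used inline in a SELECT, returns a value per row.
    "SELECT order_id, net_amount(amount, 0.18) AS net FROM analytics_db.public.orders",
    # Stored procedure: invoked explicitly, encapsulates multi-statement logic.
    "CALL analytics_db.public.purge_old_orders(365)",
]
```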
Posted 5 days ago
4.0 - 8.0 years
8 - 12 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Perform advanced analytics like cohort analysis, scenario analysis, time series analysis, and predictive analysis, creating powerful visualizations to communicate the results. Articulate assumptions, analyses, and interpretations of data in a variety of modes. Engage with stakeholders to understand business requirements and develop detailed project plans with timelines. Design data models that define how the tables, columns, and data elements from different sources are connected and stored, based on our reporting and analytics requirements. Work closely with BI engineers to develop efficient, highly performant, scalable reporting and analytics solutions. Query data from warehouses like Snowflake using SQL. Validate and QA data to ensure consistent data accuracy and quality. Expert-level skills writing complex SQL queries to create views in warehouses like Snowflake, Redshift, SQL Server, Oracle, and BigQuery. Advanced skills in designing and creating data models and dashboards in BI tools like Tableau, Domo, Looker, etc. Intermediate-level skills in analytical tools like Excel, Google Sheets, or Power BI (complex formulas, lookups, pivots, etc.). Experience in Python programming/development. Bachelor's/advanced degree in Data Analytics, Data Science, Information Systems, Computer Science, Applied Math, Statistics, or a similar field of study. Willingness to work with internal team members and stakeholders in other time zones. Needs to have a good understanding of infrastructure (VMware, RHV) and databases (Oracle/Postgres, etc.). Experience in an OSS product portfolio such as HPE vTEMIP, UCA, UTM, IBM Tivoli, Netcool, etc.
Posted 5 days ago
5.0 - 7.0 years
5 - 7 Lacs
Noida, Uttar Pradesh, India
On-site
Role & Required Skills: Proven experience in Snowflake. Good experience in SQL and Python. Experience in Data Warehousing. Experience in Data migration from SQL to Snowflake. AWS experience is nice to have. Good communication skills. Responsibilities: Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics). Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models. Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models. Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization. Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC. Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks. Analyze and translate business needs into long-term solution data models. Evaluate existing data systems. Work with the development team to create conceptual data models and data flows. Develop best practices for data coding to ensure consistency within the system. Review modifications of existing systems for cross-compatibility. Implement data strategies and develop physical data models. Update and optimize local and metadata models. Evaluate implemented data systems for variances, discrepancies, and efficiency. Troubleshoot and optimize data systems.
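Among the responsibilities above is developing conceptual, logical, and physical data models. A minimal physical star-schema sketch is shown below; every table and column name is an invented example, not a model from the employer.

```python
# Hedged sketch of a physical star schema: two dimension tables and one fact table.
# All names are illustrative placeholders.
STAR_SCHEMA_DDL = [
    """CREATE TABLE IF NOT EXISTS dim_customer (
           customer_key  INTEGER PRIMARY KEY,
           customer_id   VARCHAR,
           customer_name VARCHAR,
           region        VARCHAR
       )""",
    """CREATE TABLE IF NOT EXISTS dim_date (
           date_key  INTEGER PRIMARY KEY,
           full_date DATE,
           year      INTEGER,
           month     INTEGER
       )""",
    """CREATE TABLE IF NOT EXISTS fact_sales (
           date_key     INTEGER REFERENCES dim_date(date_key),
           customer_key INTEGER REFERENCES dim_customer(customer_key),
           quantity     INTEGER,
           amount       NUMBER(18, 2)
       )""",
]

def create_star_schema(conn) -> None:
    """Create the dimension tables first, then the fact table that references them."""
    cur = conn.cursor()
    try:
        for ddl in STAR_SCHEMA_DDL:
            cur.execute(ddl)
    finally:
        cur.close()
```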
Posted 5 days ago
5.0 - 7.0 years
5 - 7 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Role & Required Skills: Proven experience in Snowflake. Good experience in SQL and Python. Experience in Data Warehousing. Experience in Data migration from SQL to Snowflake. AWS experience is nice to have. Good communication skills. Responsibilities: Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics). Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models. Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models. Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization. Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC. Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks. Analyze and translate business needs into long-term solution data models. Evaluate existing data systems. Work with the development team to create conceptual data models and data flows. Develop best practices for data coding to ensure consistency within the system. Review modifications of existing systems for cross-compatibility. Implement data strategies and develop physical data models. Update and optimize local and metadata models. Evaluate implemented data systems for variances, discrepancies, and efficiency. Troubleshoot and optimize data systems.
Posted 5 days ago
1.0 - 3.0 years
2 - 4 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
This position in the Global Parts Supply (GPS) Aftermarket Data Analytics (ADA) team will be responsible for defining, developing, and maintaining standard reports for a wider supply base through collaboration, using safe sources of data. This in turn helps management visualize daily/weekly/monthly positions and trends of key business-driving functionalities like Past Due, Backorder, Supplier Shipping Performance, etc. Gather user requirements; collect and document existing Safe Source data (tables / flat files); identify Safe Sources to establish and maintain Fact and Dimension tables for the business with TACCSS (Transactional, Accurate, Clean, Complete, Safe source & Standardized) data; and establish dataflow diagrams. Eager to learn and improve, continuously honing their skills. Strong solution-oriented analytic skills and creativity in problem-solving. Great attention to detail without losing sight of the big picture. Great communication skills - able to represent the architect team and address its needs and requirements in meetings with process partners (inside and outside of the greater team). Must be willing to work in different shifts to have enough overlap with the global team. Desired Qualifications: APICS CPIM or CSCP certification. Python, R, SnowSQL. Experience with data mining and data analytics in large data sets. Power BI report development. Ability to automate programs/processes. Snowflake - what we want: Strong knowledge in SQL - joins, subqueries, CTEs. Developing stored procedures and user-defined functions. Understanding of Snowflake features. Understanding of Snowflake functions like window functions, aggregate functions, date & time functions, and string functions. Troubleshooting existing SQL queries. Creating Fact and Dimension views.
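As a rough illustration of the Snowflake SQL skills listed above (CTEs, window functions, Fact and Dimension views), the sketch below builds a simple fact view with a 7-day moving average; the gps_analytics database, tables, and columns are hypothetical.

```python
# Hedged sketch: a CTE plus a window function feeding a simple fact view in Snowflake.
# Database, table, and column names are invented for illustration.
FACT_VIEW_SQL = """
CREATE OR REPLACE VIEW gps_analytics.reporting.fact_backorder_daily AS
WITH daily AS (                                   -- CTE: aggregate raw orders per day
    SELECT part_id,
           TO_DATE(order_ts)  AS order_date,
           SUM(backorder_qty) AS backorder_qty
    FROM gps_analytics.raw.orders
    GROUP BY part_id, TO_DATE(order_ts)
)
SELECT part_id,
       order_date,
       backorder_qty,
       -- window function: 7-day moving average of backorders per part
       AVG(backorder_qty) OVER (
           PARTITION BY part_id
           ORDER BY order_date
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS backorder_7d_avg
FROM daily
"""

if __name__ == "__main__":
    import snowflake.connector  # pip install snowflake-connector-python
    conn = snowflake.connector.connect(account="my_account", user="report_user",
                                       password="***", warehouse="REPORT_WH")
    try:
        conn.cursor().execute(FACT_VIEW_SQL)
    finally:
        conn.close()
```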
Posted 5 days ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
As a Data Engineer in the ADC Engineering team, you will: Work alongside our engineers to help design and build scalable data pipelines while evolving the data surface. Help prove out and productionize cloud-native infrastructure and tooling to support a scalable data cloud. Have fun as part of an awesome team. Specific Responsibilities: It's a mix of backend application engineering (Python backend) and data engineering, building solutions that leverage the existing Data Framework. Collaborating in a multi-disciplinary squad involving program and product managers, data scientists, and client professionals to expand the product offering based on business impact and demand. Be involved from the inception of projects, understanding requirements, designing and developing solutions, and incorporating them into the designs of our platforms. Maintain excellent knowledge of the technical landscape for data and cloud tooling. Assist in troubleshooting issues and support the operation of production software. Write technical documentation. Required Skills: 4+ years of industry experience in the data engineering area. Passion for engineering and optimizing data sets, data pipelines, and architecture. Ability to build processes that support data transformation, workload management, data structures, lineage, and metadata. Knowledge of SQL and performance tuning. Experience with Snowflake is preferred. Good working knowledge of languages such as Python/Java. Understanding of software deployment and orchestration technologies such as Airflow. Experience in creating and evolving CI/CD pipelines with GitLab or Azure DevOps.
Posted 5 days ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
As an Associate Data Steward, your responsibilities will span several key areas: Business & Strategic Acumen: You will collaborate closely with business units to understand evolving data requirements and align data products to meet strategic goals and objectives. You will ensure that data products support various use cases, such as operational efficiencies, risk management, and commercial applications, while defining success criteria for data offerings in collaboration with key stakeholders. Data Governance & Quality: A core aspect of this role is managing data quality through the application of robust data governance controls. You will be responsible for monitoring data health, implementing data quality metrics, and ensuring that data products meet established standards for accuracy, completeness, and consistency. Regular assessments of data sources and processes will be part of your ongoing responsibilities to identify deficiencies and opportunities for improvement. Data Product Lifecycle Management: You will support the full delivery lifecycle of data products, from ideation to release. This includes working with cross-functional teams such as product managers, engineers, and business stakeholders to plan, design, and deliver data products. In addition, you will contribute to the design and creation of conceptual, logical, and physical data models to ensure that data products meet business requirements. Requirements Gathering & Documentation: You will be actively involved in gathering, defining, and documenting business requirements for data products. This includes translating business needs into detailed data requirements and user stories for development teams. You will work to break down complex data problems into manageable tasks, ensuring alignment between technical and business requirements. Testing & Quality Assurance: During the testing phase of data product development, you will collaborate with engineering and quality assurance teams to validate that data is accurately extracted, transformed, and loaded. Ensuring that data governance controls are applied during testing is also part of your role, and you will help resolve any issues that arise. Vendor & Stakeholder Management: You will manage relationships with external data vendors to ensure that data feeds meet business requirements and quality standards. Additionally, you will work with both internal and external stakeholders to ensure that data products align with organizational goals and address customer needs. Regular engagement with stakeholders will be key to soliciting feedback on data products and identifying opportunities for enhancement. Data Stewardship Support: In addition to data management, you will provide Level 3 support for complex data-related inquiries and issues. You will proactively identify data challenges and offer data-driven solutions to meet business objectives. You will also participate in data governance initiatives, helping to define and implement best practices for data stewardship across the organization. Collaboration & Communication: You will communicate effectively with both technical and non-technical teams, ensuring that complex data concepts are conveyed clearly. Your collaboration with internal and external teams will ensure that data solutions align with business goals and industry best practices. You will be expected to work in an agile environment, managing multiple priorities to ensure efficient and timely data product delivery. 
Qualifications & Requirements: The ideal candidate will possess the following qualifications: Experience: At least 4 years of experience in data stewardship, data governance, or a related field. Experience in the financial services industry is a plus, but not required. A strong background in data modeling (logical, conceptual, physical), data governance, and data quality management is essential. Technical Skills: Proficiency in data management tools and technologies such as SQL, Unix, Tableau, etc. Familiarity with data governance platforms (e.g., Aha!, ServiceNow, Erwin Data Modeling, DataHub) and methodologies for data management and quality assurance is preferred. Knowledge of databases (relational, NoSQL, graph) and cloud-based data platforms (e.g., Snowflake) is also beneficial. Business & Communication Skills: Strong business acumen and the ability to align data products with both organizational and client needs. You should be able to effectively communicate complex technical concepts to both technical and non-technical stakeholders. Strong organizational skills and the ability to manage multiple tasks and priorities in an agile environment are essential.
Posted 5 days ago
15.0 - 18.0 years
15 - 18 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Role Overview: Data Governance Architect. As a Data Governance Architect, your work is a combination of hands-on contribution, customer engagement, and technical team management. Overall, you'll design, architect, deploy, and maintain big data-based data governance solutions. Key Responsibilities: Project Lifecycle Management: Provide technical leadership across the full life cycle of big data-based data governance projects, from requirement gathering and analysis to platform selection, architecture design, and deployment. Cloud Scalability: Scale the data governance solution within a cloud-based infrastructure. Cross-functional Collaboration: Collaborate effectively with business consultants, data scientists, engineers, and developers to deliver robust data solutions. Technology Exploration: Explore and evaluate new technologies for creative business problem-solving in the data governance space. Team Leadership: Lead and mentor a team of data governance engineers. What We Expect (Requirements): Experience: 10+ years of technical experience in the data space; 5+ years of experience in the Hadoop ecosystem; 3+ years of experience specifically in data governance solutions. Hands-on data governance solutions experience with a good understanding of: data catalog, business glossary, business metadata, technical metadata, operational metadata, data quality, data profiling, and data lineage. Hands-on experience with the following technologies: Hadoop ecosystem (HDFS, Hive, Sqoop, Kafka, ELK Stack, etc.); programming languages (Spark, Scala, Python, and core/advanced Java); cloud components (relevant AWS/GCP components required to build big data solutions). Good to know: Databricks, Snowflake. Familiarity with: designing/building large cloud-computing infrastructure solutions (in AWS/GCP); data lake design and implementation; the full life cycle of a Hadoop solution; distributed computing and parallel processing environments; HDFS administration, configuration management, monitoring, debugging, and performance tuning.
Posted 5 days ago
5.0 - 10.0 years
5 - 10 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Key Responsibilities: End-to-End Ownership: Architect, develop, and maintain full-stack systems with a focus on modularity, performance, and testability. Balance technical trade-offs, present design options, and align with long-term platform goals. Front-End Engineering: Build dynamic, performant UIs using React, TypeScript, and modern state management. Create reusable components and design systems that enable self-service analytics and intuitive user flows. Back-End Engineering: Design robust services using Java 8+, Kafka, and REST APIs to support data processing, aggregation, and computation. Implement efficient data access patterns and schema design for analytical workloads (SQL, Snowflake). Quantitative Domain Integration: Work alongside quant researchers and risk experts to productize complex financial models. Build tools to visualize, audit, and troubleshoot model outputs and portfolio analytics. DevOps & Quality: Write thorough unit and integration tests (Jest, Enzyme, JMockit). Participate in CI/CD pipelines and help maintain a culture of operational excellence. Team Collaboration: Operate in an agile environment with stakeholders across time zones. Mentor junior engineers and contribute to continuous improvement in engineering practices. What We're Looking For - Core Skills: Frontend: 5+ years with React, TypeScript, ES6+, CSS/SCSS, Bootstrap (or equivalents). Backend: Deep knowledge of Java (Java 8+), REST APIs, Kafka, Maven. Database: Proficiency with SQL, schema design, Snowflake (preferred), performance tuning. Testing: Solid grasp of modern testing tools and philosophies (Jest, Enzyme, JMockit). DevOps: Git, shell scripting, CI/CD pipelines, basic containerization knowledge. Bonus Points For: Python for scripting or quick data modeling. Knowledge of financial modeling concepts, especially around fixed income or derivatives. Experience with scalable data systems and distributed computing patterns. Familiarity with modern UI/UX principles and accessibility practices. Why Join BlackRock: Impact the core infrastructure powering global financial analytics and decisions. Work on deep technical problems with direct business relevance. Be part of a team that invests in your learning, mentorship, and career development. Collaborate across disciplines - engineering, data science, and finance - to build world-class tools.
Posted 5 days ago
10.0 - 13.0 years
10 - 13 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Job Summary: Full Stack Engineer - Data Engineering. Cigna, a leading Health Services company, is looking for an exceptional engineer in our Data & Analytics Engineering organization. The Full Stack Engineer is responsible for the delivery of a business need, from understanding the requirements to deploying the software into production. This role requires you to be fluent in some of the critical technologies, proficient in others, and to have a hunger to learn on the job and add value to the business. Critical attributes of a Full Stack Engineer, among others, are ownership, eagerness to learn, and an open mindset. In addition to delivery, the Full Stack Engineer should have an automation-first and continuous-improvement mindset, drive the adoption of CI/CD tools, and support the improvement of the tool sets and processes. Job Description & Responsibilities: Behaviors of a Full Stack Engineer: Full Stack Engineers are able to articulate clear business objectives aligned to technical specifications and work in an iterative, agile pattern daily. They have ownership over their work tasks, embrace interacting with all levels of the team, and raise challenges when necessary. We aim to be cutting-edge engineers - not institutionalized developers. Experience Required: 11-13 years of experience in Python. 11-13 years of experience in data management and SQL. 5+ years in Spark and AWS. 5+ years in Databricks. Experience working in agile CI/CD environments. Experience Desired: Git, Teradata, and Snowflake experience. Experience working on analytical models and their deployment / production enablement via data and analytical pipelines. Expertise with big data technologies - Hadoop, HiveQL, Scala/Python. Expertise in cloud technologies - S3, Glue, Terraform, Lambda, Aurora, Redshift, EMR. Experience with BDD and TDD development methodologies. Health care information domains preferred. Education and Training Required: Bachelor's degree (or equivalent) required. Primary Skills: Python, AWS, and Spark. CI/CD, Databricks. Data management and SQL. Additional Skills: Strong communication skills. Take ownership and accountability. Write referenceable and modular code. Be fluent in some areas and proficient in many others. Have a passion to learn. Have a quality mindset - not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact. Take risks and champion new ideas.
Posted 5 days ago
9.0 - 14.0 years
9 - 14 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Data is at the core of the Aladdin platform, and increasingly, our ability to consume, store, analyze, and gain insight from data is a key component of what differentiates us. As part of Aladdin Studio, the Aladdin Data Cloud (ADC) Engineering team is responsible for building and maintaining a data-as-a-service solution for all data management and transformation needs. We engineer high-performance data pipelines, provide a fabric to discover and consume data, and continually evolve our data surface capabilities. As a Data Engineer in the ADC Engineering team, you will: Work alongside our engineers to help design and build scalable data pipelines while evolving the data surface. Help prove out and deliver cloud-native infrastructure and tooling to support a scalable data cloud. Have fun as part of an amazing team. Specific Responsibilities: Leading and working as part of a multi-disciplinary squad to establish our next generation of data pipelines and tools. Be involved from the inception of projects, understanding requirements, designing and developing solutions, and incorporating them into the designs of our platforms. Mentor team members on technology and standard processes. Maintain excellent knowledge of the technical landscape for data and cloud tooling. Assist in solving issues and support the operation of production software. Design solutions and document them. Desirable Skills: 8+ years of industry experience in the data engineering area. Passion for engineering and optimizing data sets, data pipelines, and architecture. Ability to build processes that support data transformation, workload management, data structures, lineage, and metadata. Knowledge of SQL and performance tuning. Experience with Snowflake is preferred. Good understanding of languages such as Python/Java. Understanding of software deployment and orchestration technologies such as Airflow. Experience with dbt is helpful. Working knowledge of building and deploying distributed systems. Experience in creating and evolving CI/CD pipelines with GitLab or Azure DevOps. Experience in handling a multi-disciplinary team and mentoring them.
Posted 5 days ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Key Responsibilities: Design, develop, and maintain ETL processes and workflows across 3rd-party data feeds, CRM data exhaust, and industry data / insights. Optimize ETL processes for performance and scalability. Maintain and update client data repositories to ensure accuracy and completeness, focusing on timely onboarding, QA, and automation of priority 3rd-party data feeds. Collaborate with USWA Client Data Management, data scientists, and business stakeholders to understand data requirements, improve data quality, and reduce time to market when new data feeds are received. Partner with the central Aladdin engineering team on platform-level engagements (e.g., Global Client Business Client Data Platform). Ensure the seamless flow of data between internal and external systems. Fix and resolve ETL job failures and data discrepancies. Document ETL processes and maintain technical specifications. Work with data architects to model data and ingest it into data warehouses (e.g., Client Data Platform). Engage in code reviews and follow development standard methodologies. Stay updated with emerging ETL technologies and methodologies. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Eligibility Criteria: 4+ years of experience in ETL development and data warehousing. Proficiency in ETL tools such as Informatica, Talend, or SSIS. Experience with building ETL processes for cloud-based data warehousing solutions (e.g., Snowflake). In-depth working knowledge of the Python programming language, including libraries for data structures, reporting templates, etc. Extensive knowledge of writing Python data processing scripts and executing multiple scripts via batch processing. Proficiency in programming languages like Python, Java, or Scala. Experience with database management systems like SQL Server, Oracle, or MySQL. Knowledge of data modeling and data architecture principles. Excellent problem-solving and analytical skills. Ability to work in a fast-paced and dynamic environment. Strong communication skills and ability to work as part of a team. Experience in the asset management or financial services sector is a plus.
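The criteria above emphasize writing Python data-processing scripts and executing multiple scripts via batch processing. A hedged sketch of that pattern follows; the script names and log file are placeholders, not the team's actual jobs.

```python
# Illustrative batch runner for a set of Python data-processing scripts.
# Script names and the log path are hypothetical placeholders.
import logging
import subprocess
import sys
from pathlib import Path

SCRIPTS = [
    "extract_crm_feed.py",      # hypothetical third-party / CRM extract
    "transform_positions.py",   # hypothetical cleansing and conformance step
    "load_to_warehouse.py",     # hypothetical load into the cloud data warehouse
]

logging.basicConfig(
    filename="etl_batch.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_batch(script_dir: Path) -> int:
    """Run each script in order; stop on the first failure and return its exit code."""
    for name in SCRIPTS:
        script = script_dir / name
        logging.info("starting %s", script)
        result = subprocess.run([sys.executable, str(script)], capture_output=True, text=True)
        if result.returncode != 0:
            logging.error("%s failed: %s", name, result.stderr.strip())
            return result.returncode
        logging.info("finished %s", name)
    return 0

if __name__ == "__main__":
    sys.exit(run_batch(Path("jobs")))
```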
Posted 5 days ago
14.0 - 15.0 years
14 - 15 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Candidates will: Work on cutting-edge cloud technologies, AI/ML, and data-driven solutions, and be part of a dynamic and innovative team driving digital transformation. Lead high-impact Agile initiatives with top talent in the industry. Get the opportunity to grow and implement Agile at an enterprise level. Be offered competitive compensation, a flexible work culture, and learning opportunities. Roles and Responsibilities: Create the product roadmap and project plan. Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into cloud platforms. Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions. Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure. Work together with data scientists/architects and analysts to understand the needs for data and create effective data workflows. Exposure to Snowflake warehouse. Big Data Engineer with a solid background in the larger Hadoop ecosystem and real-time analytics tools, including PySpark / Scala-Spark / Hive / Hadoop CLI / MapReduce / Storm / Kafka / Lambda Architecture. Implementing data validation and cleansing techniques. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Experience in designing and hands-on development of cloud-based analytics solutions. Expert-level understanding of Azure Data Factory, Azure Data Lake, Snowflake, and PySpark is required. Good to have experience in a full-stack development background with Java and JavaScript/CSS/HTML. Knowledge of ReactJS/Angular is a plus. Designing and building data pipelines using API ingestion and streaming ingestion methods. Unix/Linux expertise; comfortable with the Linux operating system and shell scripting. Knowledge of DevOps processes (including CI/CD) and infrastructure as code is desirable. PL/SQL and RDBMS background with Oracle/MySQL. Comfortable with microservices, CI/CD, Docker, and Kubernetes. Strong experience in common Data Vault data warehouse modelling principles. Creating/modifying Docker images and deploying them via Kubernetes. Additional Skills Required: The ideal candidate should have at least 14+ years of experience in IT, in addition to the following: 10+ years of extensive development experience using Snowflake or similar data warehouse technology. Working experience with dbt and other technologies of the modern data stack, such as Snowflake, Azure, Databricks, and Python. Experience in agile processes, such as Scrum. Extensive experience in writing advanced SQL statements and performance tuning. Experience in data ingestion techniques using custom or SaaS tools. Experience in data modelling and the ability to optimize existing/new data models. Experience in data mining, data warehouse solutions, and ETL, and in using databases in a business environment with large-scale, complex datasets. Technical Qualifications (Preferred): Bachelor's degree in Computer Science, Information Systems, or a related field. Experience in high-tech, software, or telecom industries is a plus. Strong analytical skills to translate insights into impactful product initiatives.
Posted 5 days ago
2.0 - 4.0 years
2 - 4 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Position Summary: The Platform and Data Services team is looking to hire a Member of Technical Staff (MTS) to join our Data View team in Bangalore. The ideal candidate will have experience developing and deploying software and services in a public cloud environment. About Platform and Data Services: The Platform and Data Services R&D team within athenahealth works on unleashing the potential of large-volume, high-quality healthcare data within our ecosystem. This team works on products and frameworks ranging from creating a federated Data Lake to serving as a warehouse for reporting, analytics, and data science teams. Data governance models for creating well-reviewed data models, uniform report generation, and data lineage are also key focus areas for this organization. The Team: Data View is a highly successful product offering that enables our customers to get raw and transformed data in bulk to power their analytics and integration needs. Transforms are run on the Snowflake-based Data Lake, which is refreshed daily using Oracle Golden Gate change data capture pipelines. Job Responsibilities: Write high-quality code, considering cloud deployment aspects like HA/DR, scale, performance, and security. Write unit, functional, and integration tests to maintain code hygiene as per organizational standards. Participate in self and peer code reviews to ensure the product meets required quality standards. Adhere to the Definition of Done (DoD) during sprint activities, including relevant documentation. Collaborate with product managers, architects, and other stakeholders to build world-class products. Continuously learn and iterate on modern technologies in related areas. Typical Qualifications: 3 to 5 years of experience in a software engineering role Demonstrated progressive software development experience with the ability to work with minimal supervision Bachelor's degree in Engineering (Computer Science) or equivalent Proficiency in Python or similar programming languages Hands-on experience with SQL database design, optimization, and tuning Working knowledge of Unix/Linux platforms Experience with data warehouses like Snowflake, Hadoop, etc. Experience with Postgres and its internals is a strong plus Experience in deploying and maintaining services in a public cloud (AWS preferred)
Posted 5 days ago